feat: add anvil_{set,remove}BlockTimestampInterval #442
Conversation
```rust
where
    <I as IntoIterator>::IntoIter: 'a,
{
    let guard = self.get_mut();
```
Are you sure it's a good idea? It takes an exclusive lock on the variable for a potentially long period of time, restricting others from accessing it. I would probably do something like a fork/commit pattern, where at any time you can write the next value to the cache (aka fork), which will later be used by the consumer (the block producer) when the time is right.
Something like (pseudocode):
```rust
enum TimeModeChange {
    SetTimestamp(u64),
    SetInterval(u64),
}

struct TimeManager {
    fork: RwLock<Vec<TimeModeChange>>,
    state: RwLock<TimeManagerInner>,
}

impl AdvanceTime for TimeManager {
    fn advance(&mut self) -> u64 {
        // Consume all the actions from the queue (commit the fork);
        // the `set_timestamp`/`set_interval`/`advance_timestamp` methods
        // on the inner state are implied.
        let mut state = self.state.write();
        for change in self.fork.write().drain(..) {
            match change {
                TimeModeChange::SetTimestamp(ts) => state.set_timestamp(ts),
                TimeModeChange::SetInterval(secs) => state.set_interval(secs),
            }
        }
        // Return the actual timestamp after calculations.
        state.advance_timestamp()
    }
}
```
This way `ReadTime` will only have access to the "real" time available to the block producer, and the changes will be respected only after they are processed. There is no single long-held lock on the time manager state at any time; every lock is short-lived and we don't need to publicly expose it. I believe it's also not prone to TOCTOU since only the block producer is able to advance the state.
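To illustrate the fork half of this: an RPC handler could enqueue a change without ever touching the producer-owned state (a sketch on top of the pseudocode above; the handler name and wiring are hypothetical):

```rust
// Hypothetical RPC-side write: non-blocking with respect to block
// production, since it only appends to the fork queue.
fn set_block_timestamp_interval(manager: &TimeManager, seconds: u64) {
    manager.fork.write().push(TimeModeChange::SetInterval(seconds));
}
```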
Agree that fork/commit would be helpful here, especially to make some of the mutating calls non-blocking. The way I see it, however, there is a problem with the rest of the proposal.

> I believe it's also not prone to TOCTOU since only the block producer is able to advance the state.
Assuming you mean that `BlockProducer` is the only place that can advance state, then no, that's not true. There are multiple entry points into `InMemoryNode::seal_block` as some endpoints require us to do synchronous block production.

Consider a scenario where there are two concurrent calls to `anvil_mine`: one with `num=10, interval=100s`, another with `num=10, interval=1000s`. Before this PR they would execute the following piece of code (simplified version from here):
```rust
for i in 0..num_blocks {
    if i != 0 {
        // Offset every block after the first by `interval`.
        self.time.increase_time(interval);
    }
    self.seal_block([]);
}
```
There is an obvious problem here: each call's resulting blocks might not have exactly `interval` time between them. This PR is an attempt to ensure that we apply offsets specifically for the next N blocks we are about to produce and nothing else.
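For example, with the loop above running concurrently for both calls, one possible interleaving is:

```rust
// call A (interval=100s):  self.time.increase_time(100);   // clock += 100
// call B (interval=1000s): self.time.increase_time(1000);  // clock += 1000
// call A:                  self.seal_block([]);            // 1100s after the previous block
// call B:                  self.seal_block([]);            // 0s after A's block
```

Neither call ends up with blocks spaced by its requested interval.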
But even if we are okay with intervals bleeding into one another, there is a bigger problem inside `seal_block` (again a simplified version of this):
```rust
let inner = self.read_guard();
// Uses `TimestampManager` to predict the next timestamp.
let system_env = inner.create_system_env();
drop(inner);
// ...
let inner = self.write_guard();
// Takes the next timestamp from `TimestampManager` and advances it.
let expected_timestamp = inner.time.next_timestamp();
assert_eq!(block.timestamp.as_u64(), expected_timestamp);
```
There is a TOCTOU here, as a competing block production thread could have already consumed the timestamp you were "supposed" to own. Now, IMHO this is mostly `InMemoryNode`'s fault for being designed in such a way (the same exact problem also affects block/batch numbers and potentially even more things). I gave refactoring this a shot, and it is a huge endeavour that I don't think I can spare time for right this moment.
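Spelled out, the race between two sealing threads looks something like this (an illustrative trace, not taken from the code):

```rust
// Thread A: read_guard(); predicts timestamp T via `create_system_env()`; drops the guard.
// Thread B: runs `seal_block` to completion, consuming T via `next_timestamp()`.
// Thread A: write_guard(); `next_timestamp()` now yields T + 1, but A's block
//           was built against T, so the assertion fails.
```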
To sum everything up, `time` essentially acts as an exclusive lock for block production, which solves the problems I am trying to solve in this PR. I agree that it comes at a heavy cost, so thinking about this further, I might not need a write lock for the entire duration. What I actually need is a read lock that can be atomically upgraded to a write lock in short bursts (assuming I can do a successful minor refactoring of `seal_block`). I am not familiar with this pattern though, so I will have to look it up. It is also totally possible I misunderstood your proposal, so please elaborate if so.
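For reference, the upgradable read-lock pattern is available in the `parking_lot` crate; a minimal sketch, assuming `parking_lot` and a hypothetical `Time` struct (neither is prescribed by this PR):

```rust
use parking_lot::{RwLock, RwLockUpgradableReadGuard};

struct Time {
    current_timestamp: u64,
}

fn seal_block(time: &RwLock<Time>) {
    // An upgradable read permits concurrent plain readers, but excludes
    // writers and other upgradable readers, so the value read here cannot
    // change before the upgrade below.
    let guard = time.upgradable_read();
    let expected_timestamp = guard.current_timestamp + 1;
    // ... build the block against `expected_timestamp` ...

    // Atomically upgrade to a write lock for a short mutation burst.
    let mut guard = RwLockUpgradableReadGuard::upgrade(guard);
    guard.current_timestamp = expected_timestamp;
}
```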
Force-pushed from 396abd4 to adbdcea
Force-pushed from adbdcea to f7a548e
Not the best solution to the problem, but I guess it's OK for the time being.
What 💻
Closes #423
This kind of snowballed out of control as I had to refactor time management in a way that unties it from relying on `+1` everywhere. The gist is that I introduced two new traits:

- `TimeRead` to represent a shared read-only view on time
- `TimeExclusive` to represent an exclusive writeable view on time

Sealing blocks now requires a `TimeExclusive` instance to be passed, and we have some flexibility in what we can pass. This fixes a few TOCTOU bugs that we used to have (we rely on the clock not progressing between us constructing the new block and applying it, which was not always the case before but is now with `TimeExclusive`). This also made `mine_blocks` more robust, as it enforces the provided offsets now.

Additionally, this PR removes time from snapshot state (see https://github.com/matter-labs/era-test-node/issues/414 for why I think this is a movement in the right direction).
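For a rough idea of the shape, something like this (a sketch; only the trait names come from this PR, the method signatures are assumptions):

```rust
/// Shared read-only view on time.
trait TimeRead {
    /// Peeks at the timestamp the next block would get, without advancing.
    fn peek_next_timestamp(&self) -> u64;
}

/// Exclusive writeable view on time. While an implementor is held,
/// the clock cannot progress underneath the holder.
trait TimeExclusive: TimeRead {
    /// Consumes the next timestamp, advancing the clock.
    fn advance_timestamp(&mut self) -> u64;
}
```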
New functionality is covered with an e2e test, but the refactoring is critically lacking in tests. I am open to adding some tests for the `time` module, but I have already spent too much time on this, so I might just add this to the long list of TODOs.

Why ✋
Feature parity with anvil